

Identification and Inference in Nonlinear Dynamic Network Models

Vallarino, Diego

arXiv.org Machine Learning

We study identification and inference in nonlinear dynamic systems defined on unknown interaction networks. The system evolves through an unobserved dependence matrix governing cross-sectional shock propagation via a nonlinear operator. We show that the network structure is not generically identified, and that identification requires sufficient spectral heterogeneity. In particular, identification arises when the network induces non-exchangeable covariance patterns through heterogeneous amplification of eigenmodes. When the spectrum is concentrated, dependence becomes observationally equivalent to common shocks or scalar heterogeneity, leading to non-identification. We provide necessary and sufficient conditions for identification, characterize observational equivalence classes, and propose a semiparametric estimator with asymptotic theory. We also develop tests for network dependence whose power depends on spectral properties of the interaction matrix. The results apply to a broad class of economic models, including production networks, contagion models, and dynamic interaction systems.
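As rough intuition for the spectral condition, the toy simulation below contrasts a dependence matrix with well-separated eigenvalues against a near-scalar one; only the former produces the non-exchangeable cross-sectional covariance pattern the abstract describes. This is our own illustrative construction, not the paper's model or estimator: the tanh recursion, the noise scale, and both choices of W are assumptions made purely for demonstration.

```python
# Toy illustration (not the paper's specification): simulate
# x_t = tanh(W x_{t-1}) + eps_t and compare cross-sectional covariances
# for a spectrally heterogeneous W versus a spectrally concentrated one.
import numpy as np

rng = np.random.default_rng(0)
n, T = 8, 20000

def covariance_of_simulation(W):
    x = np.zeros(n)
    X = np.empty((T, n))
    for t in range(T):
        x = np.tanh(W @ x) + 0.1 * rng.standard_normal(n)
        X[t] = x
    return np.cov(X.T)

# Heterogeneous spectrum: distinct eigenvalues amplify eigenmodes
# differently, inducing non-exchangeable covariance patterns.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
W_het = Q @ np.diag(np.linspace(0.1, 0.8, n)) @ Q.T

# Concentrated spectrum: W is (close to) a scalar multiple of the identity,
# observationally similar to scalar heterogeneity or common shocks.
W_con = 0.5 * np.eye(n)

for name, W in [("heterogeneous", W_het), ("concentrated", W_con)]:
    C = covariance_of_simulation(W)
    off = C[~np.eye(n, dtype=bool)]
    print(f"{name}: spread of off-diagonal covariances = {off.std():.4f}")
```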



A Statistical Recurrent Model on the Manifold of Symmetric Positive Definite Matrices

Neural Information Processing Systems

In a number of disciplines, the data (e.g., graphs, manifolds) to be analyzed are non-Euclidean in nature. Geometric deep learning corresponds to techniques that generalize deep neural network models to such non-Euclidean spaces. Several recent papers have shown how convolutional neural networks (CNNs) can be extended to learn with graph-based data. In this work, we study the setting where the data (or measurements) are ordered, longitudinal or temporal in nature and live on a Riemannian manifold -- this setting is common in a variety of problems in statistical machine learning, vision and medical imaging. We show how statistical recurrent network models can be defined in such spaces. We give an efficient algorithm and conduct a rigorous analysis of its statistical properties. We perform extensive numerical experiments demonstrating competitive performance with state-of-the-art methods but with significantly fewer parameters. We also show applications to a statistical analysis task in brain imaging, a regime where deep neural network models have only been utilized in limited ways.
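To make the geometric setting concrete, here is a minimal sketch assuming a log-Euclidean treatment: each SPD matrix in a temporal sequence is mapped to a Euclidean tangent representation via the matrix logarithm and fed to an ordinary recurrent cell. This is not the paper's architecture; the dimensions, weights, and the choice of chart are illustrative assumptions.

```python
# Sketch (our assumption, not the paper's recurrent model): vectorize each
# SPD matrix through the matrix logarithm (log-Euclidean chart), then run
# a plain tanh recurrent update on the resulting Euclidean coordinates.
import numpy as np

def spd_log_vec(S):
    """Upper triangle of log(S) via eigendecomposition; valid for SPD S."""
    w, V = np.linalg.eigh(S)
    L = (V * np.log(w)) @ V.T
    return L[np.triu_indices_from(L)]

rng = np.random.default_rng(0)
d, T = 4, 10
# Toy sequence of SPD matrices, e.g. covariance-like measurements.
seq = []
for _ in range(T):
    A = rng.standard_normal((d, d))
    seq.append(A @ A.T + d * np.eye(d))  # SPD by construction

k = d * (d + 1) // 2                # tangent-vector dimension
Wh = 0.5 * rng.standard_normal((k, k))
Wx = 0.5 * rng.standard_normal((k, k))
h = np.zeros(k)
for S in seq:                       # ordinary RNN cell in the tangent space
    h = np.tanh(Wh @ h + Wx @ spd_log_vec(S))
print("final hidden state norm:", np.linalg.norm(h))
```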



EICIL: Joint Excitatory Inhibitory Cycle Iteration Learning for Deep Spiking Neural Networks

Neural Information Processing Systems

Spiking neural networks (SNNs) have undergone continuous development and extensive research over the decades, aiming to improve biological plausibility while optimizing energy efficiency. However, traditional deep SNN training methods have limitations: they rely on strategies such as pre-training and fine-tuning, indirect encoding and reconstruction, and approximate gradients. These strategies do not constitute complete training models and lack biological compatibility. To overcome these limitations, we propose a novel learning method named Joint Excitatory Inhibitory Cycle Iteration Learning for Deep Spiking Neural Networks (EICIL). Inspired by biological neuron signal transmission, this method integrates excitatory and inhibitory behaviors of neurons, organically combining the two behavioral modes into one framework. EICIL significantly improves the biomimicry and adaptability of spiking neuron models and expands their representation space. Extensive experiments comparing EICIL with traditional learning methods show that EICIL outperforms those methods on datasets such as CIFAR10 and CIFAR100, demonstrating the key role of learning methods that integrate both behaviors during training.
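For readers unfamiliar with the excitatory/inhibitory distinction, the sketch below shows a generic leaky integrate-and-fire neuron driven by both current types. It is a textbook building block for intuition only, not the EICIL training procedure; all parameters and spike-train statistics here are arbitrary assumptions.

```python
# Minimal leaky integrate-and-fire neuron combining excitatory and
# inhibitory input currents. Generic SNN intuition only -- not EICIL.
import numpy as np

rng = np.random.default_rng(0)
T, tau, v_th = 100, 10.0, 1.0

exc = rng.random(T) < 0.3          # excitatory presynaptic spike train
inh = rng.random(T) < 0.1          # inhibitory presynaptic spike train
w_e, w_i = 0.4, 0.6                # synaptic weights

v, spikes = 0.0, []
for t in range(T):
    i_t = w_e * exc[t] - w_i * inh[t]   # net input current
    v += (-v + i_t * tau) / tau         # leaky membrane integration
    if v >= v_th:                       # threshold crossing -> spike
        spikes.append(t)
        v = 0.0                         # hard reset
print("spike times:", spikes)
```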






The Value-Equivalence Principle for Model-Based Reinforcement Learning: Supplementary Material

Neural Information Processing Systems

Moreover, we include an additional result which illustrates a situation in which approximate VE models can outperform the MLE model. For each $(i,j)$ pair, the above expression is suggestive of a dot product between two $nm$-dimensional vectors: a combination of $a_i$ and $c_j$, and a "flattened" version of $B$. Define the former combination of vectors as $d_{ij} = [a_{i1}c_{j1}, a_{i1}c_{j2}, \ldots, a_{in}c_{jm}]^\top \in \mathbb{R}^{nm \times 1}$, and stack them as rows: $D = [d_{11}, d_{12}, \ldots, d_{nm}]^\top \in \mathbb{R}^{k\ell \times nm}$. To flatten $B$, simply define $b = [B_{11}, B_{12}, \ldots, B_{nm}]^\top$. Finally, notice that the construction of $d_{ij}$ can be thought of as vertically stacking $n$ copies of $c_j$, each scaled by a different entry of $a_i$. This means that scaled copies of both $a_i$ and $c_j$ can be found by selecting specific groups of indices in $d_{ij}$. It follows that if $a_1, \ldots, a_n$ are linearly independent, then so are $d_{1j}, \ldots, d_{nj}$ for any $j$.
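Both the dot-product identity and the independence claim are easy to check numerically: $d_{ij}$ as defined is exactly the Kronecker product $a_i \otimes c_j$, so $d_{ij}^\top b$ recovers $a_i^\top B c_j$ when $b$ is the row-major flattening of $B$. The dimensions below are arbitrary illustrative choices, not those of the paper.

```python
# Numerical check: a_i^T B c_j = d_ij^T b with d_ij = kron(a_i, c_j) and
# b the row-major flattening of B, plus the linear-independence property.
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
A = rng.standard_normal((n, n))    # rows a_1..a_n, generically independent
C = rng.standard_normal((m, m))    # rows c_1..c_m
B = rng.standard_normal((n, m))
b = B.reshape(-1)                  # b = [B_11, B_12, ..., B_nm]^T

# d_ij = [a_i1 c_j1, a_i1 c_j2, ..., a_in c_jm]^T is the Kronecker product.
d = lambda i, j: np.kron(A[i], C[j])

i, j = 1, 2
assert np.isclose(A[i] @ B @ C[j], d(i, j) @ b)

# If a_1,...,a_n are linearly independent, so are d_1j,...,d_nj for any j.
Dj = np.stack([d(i, j) for i in range(n)])
print("rank of stacked [d_1j; ...; d_nj] =", np.linalg.matrix_rank(Dj), "(n =", n, ")")
```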